The Convex Information Bottleneck Lagrangian

Authors
Abstract


Related articles

The information bottleneck method

We define the relevant information in a signal x ∈ X as the information that this signal provides about another signal y ∈ Y. Examples include the information that face images provide about the names of the people portrayed, or the information that speech sounds provide about the words spoken. Understanding the signal x requires more than just predicting y; it also requires specifying wh...
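The trade-off sketched in this abstract is usually written as the information bottleneck Lagrangian; a standard formulation (following Tishby, Pereira, and Bialek, 1999, with T the compressed representation of X and β ≥ 0 the trade-off parameter) is:

```latex
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta\, I(T;Y)
```

Minimizing I(X;T) compresses X, while the −β I(T;Y) term rewards keeping the bits of X that are predictive of Y; β controls the balance between the two.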


The Deterministic Information Bottleneck

Lossy compression and clustering fundamentally involve a decision about which features are relevant and which are not. The information bottleneck method (IB) by Tishby, Pereira, and Bialek (1999) formalized this notion as an information-theoretic optimization problem and proposed an optimal trade-off between throwing away as many bits as possible and selectively keeping those that are most im...


Convex Optimization and Lagrangian Duality

Finally, the Lagrange dual function is given by g(λ, ν) = inf_x L(x, λ, ν). We now make a couple of simple observations. Observation: when L(·, λ, ν) is unbounded from below, the dual takes the value −∞. Observation: g(λ, ν) is concave, as it is the infimum of a set of affine functions. If x is a feasible solution of program (10.2)–(10.4), then we have the following: L(x, λ, ν) = f0(x)...
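The inequality truncated at the end of this excerpt is the standard weak-duality argument; a sketch, assuming the usual convention that a feasible x satisfies f_i(x) ≤ 0 and h_j(x) = 0, and that λ ≥ 0:

```latex
L(x,\lambda,\nu)
  = f_0(x) + \sum_i \lambda_i f_i(x) + \sum_j \nu_j h_j(x)
  \le f_0(x),
```

so g(λ, ν) = inf_{x'} L(x', λ, ν) ≤ f_0(x), i.e. the dual function lower-bounds every feasible primal objective value.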


The U-Lagrangian of a Convex Function

At a given point p, a convex function f is differentiable in a certain subspace U (the subspace along which ∂f(p) has zero breadth). This property opens the way to defining a suitably restricted second derivative of f at p. We do this via an intermediate function, convex on U. We call this function the U-Lagrangian; it coincides with the ordinary Lagrangian in composite cases: exact penalty, semi...


The Information Bottleneck EM Algorithm

Learning with hidden variables is a central challenge in probabilistic graphical models that has important implications for many real-life problems. The classical approach is using the Expectation Maximization (EM) algorithm. This algorithm, however, can get trapped in local maxima. In this paper we explore a new approach that is based on the Information Bottleneck principle. In this approach, ...



Journal

Journal title: Entropy

Year: 2020

ISSN: 1099-4300

DOI: 10.3390/e22010098